Biomedical knowledge graphs (KGs) are heterogeneous networks whose nodes are biological entities and whose edges are relations between them. These entities and relations are extracted from millions of research papers and unified in a single resource. The goal of biomedical multi-hop question answering over knowledge graphs (KGQA) is to help biologists and scientists obtain valuable insights by asking questions in natural language. Relevant answers can be found by first understanding the question and then querying the KG for the right set of nodes and relationships to arrive at an answer. To model the question, language models such as RoBERTa and BioBERT are used to capture context from the natural-language question. One of the challenges in KGQA is missing links in the KG. Knowledge graph embeddings (KGEs) help to overcome this problem by encoding nodes and edges in a dense and more efficient way. In this paper, we use a publicly available KG called Hetionet, an integrative network of biomedical knowledge assembled from 29 different databases of genes, compounds, diseases, and more. We have enriched this KG by creating a multi-hop biomedical question-answering dataset in natural language for testing the biomedical multi-hop question-answering system, and this dataset will be made available to the research community. The major contribution of this research is an integrated system that combines language models with KG embeddings to give highly relevant answers to free-form questions asked by biologists through an intuitive interface. The biomedical multi-hop question-answering system was tested on this dataset, and the results are highly encouraging.
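A minimal sketch of the general pattern this abstract describes - encode the question with a biomedical language model, then score candidate answer entities against knowledge graph embeddings - is given below. The model name, projection layer, and bilinear scorer are illustrative assumptions, not the authors' implementation, which may use a different encoder or scoring function.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class QuestionEncoder(nn.Module):
    """Encode a natural-language question with a biomedical LM and project it into the KGE space."""
    def __init__(self, kg_dim: int, lm_name: str = "dmis-lab/biobert-base-cased-v1.1"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(lm_name)
        self.lm = AutoModel.from_pretrained(lm_name)
        self.proj = nn.Linear(self.lm.config.hidden_size, kg_dim)

    def forward(self, question: str) -> torch.Tensor:
        tokens = self.tokenizer(question, return_tensors="pt")
        cls = self.lm(**tokens).last_hidden_state[:, 0]   # [CLS] representation of the question
        return self.proj(cls)                             # question acts as a learned "relation" vector

def score_answers(topic_entity_emb: torch.Tensor,
                  question_emb: torch.Tensor,
                  all_entity_embs: torch.Tensor) -> torch.Tensor:
    """Score every KG entity as a candidate answer (higher is better).
    A simple bilinear, DistMult-style scorer is used here for brevity."""
    return (topic_entity_emb * question_emb) @ all_entity_embs.T
```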
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing, or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies in open-ended, task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of data size, model size, and data diversity, based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
We develop a Bayesian approach to predict a continuous or binary outcome from data collected from multiple sources with a multi-way (i.e., multidimensional tensor) structure. As a motivating example, we consider molecular data from multiple 'omics sources, each measured over multiple developmental time points, as predictors of early-life iron deficiency (ID) in a rhesus monkey model. We use a linear model with a low-rank structure on the coefficients to capture multi-way dependence, and we model the variance of the coefficients separately for each source to infer their relative contributions. Conjugate priors facilitate an efficient Gibbs sampling algorithm for posterior inference, assuming a continuous outcome with normal errors or a binary outcome with a probit link. Simulations show that our model performs as expected in terms of misclassification rate and correlation of the estimated coefficients with the true coefficients, with gains in performance from incorporating the multi-way structure and modest but stable gains when accounting for the differing signal sizes of the different sources. Moreover, it yields robust classification of ID monkeys for our motivating application. Software in the form of R code is available at https://github.com/biostatskim/bayesmsmw.
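As a rough sketch written from the description above (not the paper's exact specification), the model couples a low-rank coefficient tensor per source with a source-specific variance; the rank R, the factorization, and the priors shown here are assumptions.

```latex
\begin{align*}
  y_i &= \mu + \sum_{s=1}^{S} \big\langle \mathcal{X}_i^{(s)}, \mathcal{B}^{(s)} \big\rangle + \epsilon_i,
        \qquad \epsilon_i \sim \mathcal{N}(0, \sigma^2)
        && \text{(continuous outcome)} \\
  \Pr(y_i = 1 \mid \cdot) &= \Phi\!\Big(\mu + \sum_{s=1}^{S} \big\langle \mathcal{X}_i^{(s)}, \mathcal{B}^{(s)} \big\rangle\Big)
        && \text{(binary outcome, probit link)} \\
  \mathcal{B}^{(s)} &= \sum_{r=1}^{R} \mathbf{u}_r^{(s)} \circ \mathbf{v}_r^{(s)}
        && \text{(low-rank, multi-way coefficients for source } s\text{)} \\
  \mathbf{u}_r^{(s)},\, \mathbf{v}_r^{(s)} &\sim \mathcal{N}\big(\mathbf{0},\, \tau_s^2 \mathbf{I}\big)
        && \text{(source-specific variance } \tau_s^2 \text{ sets its contribution)}
\end{align*}
```

With conjugate (e.g., inverse-gamma) priors on the variance parameters, every conditional in the Gibbs sampler is available in closed form, which is what makes posterior inference efficient.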
Through a series of federal initiatives and orders, the U.S. government has been working to ensure American leadership in AI. These broad strategy documents have influenced organizations such as the United States Department of the Air Force (DAF). The DAF-MIT AI Accelerator is an initiative between the DAF and MIT to bridge the gap between AI researchers and DAF mission requirements. Several projects supported by the DAF-MIT AI Accelerator are developing public challenge problems that address numerous federal AI research priorities. These challenges target the priorities by making large, AI-ready datasets publicly available, incentivizing open-source solutions, and creating a demand signal for dual-use technologies that can stimulate further research. In this paper, we describe these public challenges being developed and how their application contributes to scientific advances.
Human behavior is increasingly captured on mobile devices, driving interest in automated human activity recognition. However, existing datasets typically consist of scripted movements. Our long-term goal is to perform mobile activity recognition in natural settings. We collect a dataset to support activity categories relevant to downstream tasks such as health monitoring and intervention. Because there is large variability in human behavior, we collect data from many participants across two different age groups. Because human behavior changes over time, we also collect data from participants over the course of a month to capture temporal drift. We hypothesize that mobile activity recognition can benefit from unsupervised domain adaptation algorithms. To address this need and test this hypothesis, we analyze the performance of domain adaptation across people and across time. We then enhance unsupervised domain adaptation with contrastive learning, and with weak supervision when a proportion of labels is available. The dataset is available at https://github.com/wsu-casas/smartwatch-data
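A hedged sketch of one common way to combine the two ingredients mentioned above - a contrastive objective over augmented sensor windows plus a feature-alignment term between a source person/time period and a target - follows; the actual losses, architecture, and weak-supervision scheme in the paper may differ.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive (NT-Xent-style) loss between two augmented views of the same sensor windows."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature              # similarity of every view pair in the batch
    targets = torch.arange(z1.size(0))            # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def coral(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Simple unsupervised domain-adaptation term (CORAL): match the second-order
    statistics of source-domain and target-domain features."""
    cs = torch.cov(source_feats.T)                # feature covariance, source person/time
    ct = torch.cov(target_feats.T)                # feature covariance, target person/time
    return ((cs - ct) ** 2).sum() / (4 * source_feats.size(1) ** 2)
```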
The primate visual system is the gold standard for robust perception. It is therefore widely believed that mimicking the neural representations underlying these systems will yield artificial visual systems that are adversarially robust. In this work, we develop a method to perform adversarial visual attacks directly on primate brain activity. We then leverage this method to demonstrate that the above belief may not be well founded. Specifically, we report that the biological neurons that make up the primate visual system exhibit a sensitivity to adversarial perturbations comparable to that of existing (well-trained) artificial neural networks.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean up a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding through pretrained skills, which are used to constrain the model to propose natural-language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedure for executing complex and temporally extended instructions, while value functions associated with these skills provide the grounding needed to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we demonstrate the need for real-world grounding and show that this approach is able to complete long-horizon, abstract natural-language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/.
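The selection rule described above can be summarized in a short sketch: the language model scores how useful each pretrained skill would be for the instruction, the skill's value function scores how likely it is to succeed from the current state, and the robot executes the skill with the highest product. The function names below are placeholders, not the released SayCan code.

```python
def select_skill(instruction: str, history: list[str], skills: list[str],
                 llm_score, value_fn, state) -> str:
    """Pick the next skill by combining language-model usefulness with affordance."""
    best_skill, best_score = None, float("-inf")
    for skill in skills:
        # p(skill description | instruction + skills executed so far), from the language model
        usefulness = llm_score(instruction, history, skill)
        # learned value function: probability the skill can succeed from the current state
        feasibility = value_fn(state, skill)
        score = usefulness * feasibility
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill
```

Repeating this selection step, appending each executed skill to the history, yields the long-horizon behavior the abstract describes.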
Both non-neural and neural biological systems can learn. Rather than focusing solely on purely brain-like learning, it is therefore worthwhile to study learning in physical systems. Such efforts include Equilibrium Propagation (EP) and Coupled Learning (CL), which require storing two distinct states - a free state and a perturbed state - in order to retain information about the gradient. Inspired by slime molds, we propose a new learning algorithm rooted in chemical signaling that does not require two distinct states. Instead, output-error information is encoded in a chemical signal that travels in a manner similar to the activation/feed-forward signal. The steady-state feedback chemical concentration, together with the activation signal, stores the required gradient information locally. We apply the algorithm to a physical, linear flow network and test it on the Iris dataset, achieving 93% accuracy. We also show that our algorithm performs gradient descent. Finally, in addition to comparing our algorithm with EP and CL, we address its biological plausibility.
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying-related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. For comparison, we also test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying-related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
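A minimal sketch of the kind of relevance query described above, written against the legacy OpenAI completions API through which text-davinci-003 was served; the prompt wording, output format, and parsing are illustrative assumptions rather than the authors' exact setup.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

def bill_relevance(bill_title: str, bill_summary: str,
                   company: str, business_description: str) -> str:
    """Ask the model whether a bill is relevant to a company, with an explanation and confidence."""
    prompt = (
        f"Company: {company}\n"
        f"Business description: {business_description}\n"
        f"Bill title: {bill_title}\n"
        f"Bill summary: {bill_summary}\n\n"
        "Is this bill relevant to the company? Answer YES or NO, then give a short "
        "explanation and a confidence level from 0 to 100.\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.0,   # deterministic output, so answers can be benchmarked against labels
    )
    return response.choices[0].text.strip()
```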